
    Anticipation in Human-Robot Cooperation: A Recurrent Neural Network Approach for Multiple Action Sequences Prediction

    Close human-robot cooperation is a key enabler for new developments in advanced manufacturing and assistive applications. Close cooperation requires robots that can predict human actions and intent, and understand human non-verbal cues. Recent approaches based on neural networks have led to encouraging results on the human action prediction problem, in both continuous and discrete spaces. Our approach extends the research in this direction. Our contributions are three-fold. First, we validate the use of gaze and body pose cues as a means of predicting human action through a feature selection method. Next, we address two shortcomings of the existing literature: predicting multiple and variable-length action sequences. This is achieved by introducing an encoder-decoder recurrent neural network topology in the discrete action prediction problem. In addition, we theoretically demonstrate the importance of predicting multiple action sequences as a means of estimating the stochastic reward in a human-robot cooperation scenario. Finally, we show the ability to effectively train the prediction model on an action prediction dataset involving human motion data, and explore the influence of the model's parameters on its performance. Source code repository: https://github.com/pschydlo/ActionAnticipation Comment: IEEE International Conference on Robotics and Automation (ICRA) 2018, Accepted
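
    A minimal sketch of such an encoder-decoder topology is given below in PyTorch. It is an assumption-laden illustration, not the authors' released implementation (see their repository for that): an encoder GRU summarizes per-frame gaze/pose features, and a decoder GRU unrolls a discrete action sequence of variable length. Layer sizes, token handling, and greedy decoding are all illustrative choices; predicting multiple candidate sequences would replace the greedy step with beam search or sampling.

```python
# Minimal encoder-decoder sketch for discrete action-sequence prediction.
# All names, sizes, and the GRU choice are illustrative assumptions, not
# the paper's exact architecture.
import torch
import torch.nn as nn

class ActionSeq2Seq(nn.Module):
    def __init__(self, n_actions, feat_dim=32, hidden=64):
        super().__init__()
        # Encoder consumes observed cues (e.g. gaze/pose features) per frame.
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        # Decoder emits a distribution over discrete actions step by step.
        self.embed = nn.Embedding(n_actions + 1, hidden)  # index 0 = <start>
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, cues, max_steps=5):
        _, h = self.encoder(cues)  # summarize the observation window
        tok = torch.zeros(cues.size(0), 1, dtype=torch.long)  # <start> token
        logits = []
        for _ in range(max_steps):  # variable-length unrolling
            out, h = self.decoder(self.embed(tok), h)
            step = self.head(out)            # (B, 1, n_actions)
            logits.append(step)
            tok = step.argmax(dim=-1) + 1    # greedy; +1 skips <start> slot
        return torch.cat(logits, dim=1)

model = ActionSeq2Seq(n_actions=10)
pred = model(torch.randn(2, 20, 32))  # 2 clips, 20 frames, 32-dim cues
print(pred.shape)                     # torch.Size([2, 5, 10])
```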

    Robotic Learning the Sequence of Packing Irregular Objects from Human Demonstrations

    We address the unsolved task of robotic bin packing with irregular objects, such as groceries, where the underlying constraints on object placement and manipulation, and the diverse physical properties of the objects, make preprogrammed strategies infeasible. Our approach is to learn directly from expert demonstrations in order to extract implicit task knowledge and strategies that achieve efficient space usage and safe object positioning, and to generate human-like behaviors that enhance human-robot trust. We collect and make available a novel and diverse dataset, BoxED, of box-packing demonstrations performed by humans in virtual reality. In total, 263 boxes were packed with supermarket-like objects by 43 participants, yielding 4644 object manipulations. We use the BoxED dataset to learn a Markov chain that predicts the object packing sequence for a given set of objects, and compare it with human performance. Our experimental results show that the model surpasses human performance, generating sequence predictions that humans classify as human-like more frequently than human-generated sequences. Comment: 8 pages, 7 figures
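
    As a rough illustration of the sequence model, the sketch below fits a first-order Markov chain over object categories from demonstration sequences and greedily decodes a packing order. The category names and the greedy decoding rule are invented for illustration; the actual BoxED pipeline stores far richer manipulation data.

```python
# Minimal first-order Markov chain over packing sequences.
from collections import defaultdict

def fit_transitions(demos):
    """Count category-to-category transitions, including a START state."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in demos:
        prev = "START"
        for cat in seq:
            counts[prev][cat] += 1
            prev = cat
    # Normalize counts into transition probabilities.
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

def predict_order(probs, remaining):
    """Greedily pick the most probable next category among items left."""
    order, state = [], "START"
    remaining = list(remaining)
    while remaining:
        nxt = probs.get(state, {})
        best = max(remaining, key=lambda c: nxt.get(c, 0.0))
        order.append(best)
        remaining.remove(best)
        state = best
    return order

demos = [["box", "bottle", "fruit"], ["box", "fruit", "bottle"],
         ["box", "bottle", "fruit"]]
P = fit_transitions(demos)
print(predict_order(P, ["fruit", "bottle", "box"]))  # ['box', 'bottle', 'fruit']
```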

    Learning at the Ends: From Hand to Tool Affordances in Humanoid Robots

    One of the open challenges in designing robots that operate successfully in the unpredictable human environment is how to make them able to predict what actions they can perform on objects, and what the effects of those actions will be, i.e., the ability to perceive object affordances. Since modeling all possible world interactions is infeasible, learning from experience is required, which poses the challenge of collecting a large amount of experiences (i.e., training data). Typically, a manipulative robot operates on external objects using its own hands (or similar end-effectors), but in some cases the use of tools may be desirable. Nevertheless, it is reasonable to assume that while a robot can collect many sensorimotor experiences using its own hands, it cannot do so for all possible human-made tools. Therefore, in this paper we investigate the developmental transition from hand to tool affordances: which sensorimotor skills acquired by a robot with its bare hands can be employed for tool use? By employing a visual and motor imagination mechanism to represent different hand postures compactly, we propose a probabilistic model to learn hand affordances, and we show how this model can generalize to estimate the affordances of previously unseen tools, ultimately supporting planning, decision-making and tool selection tasks in humanoid robots. We present experimental results with the iCub humanoid robot, and we publicly release the collected sensorimotor data in the form of a hand posture affordances dataset. Comment: dataset available at https://vislab.isr.tecnico.ulisboa.pt/, IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob 2017)
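
    A toy version of such a probabilistic affordance model can be sketched as a smoothed conditional table P(effect | shape descriptor, action), where the hand and a tool share the same compact shape descriptor, so that hand-learned statistics transfer to an unseen tool. Everything below (feature bins, effect labels, Laplace smoothing) is an illustrative assumption, not the paper's model.

```python
# Toy probabilistic affordance model: P(effect | shape, action), estimated
# from discretized sensorimotor experiences.
from collections import Counter

EFFECTS = ["moved", "unmoved"]

def fit(experiences, alpha=1.0):
    counts = Counter(experiences)  # (shape, action, effect) triples
    def posterior(shape, action):
        # Laplace-smoothed conditional distribution over effects.
        c = [counts[(shape, action, e)] + alpha for e in EFFECTS]
        z = sum(c)
        return {e: ci / z for e, ci in zip(EFFECTS, c)}
    return posterior

# Hand experiences, with the hand posture summarized by a coarse shape bin.
hand_data = [
    ("round", "push", "moved"), ("round", "push", "moved"),
    ("flat", "push", "moved"), ("flat", "pull", "unmoved"),
    ("round", "pull", "unmoved"),
]
predict = fit(hand_data)

# A previously unseen tool is mapped to the same shape descriptor, so the
# hand-learned model transfers without tool-specific data collection.
print(predict("round", "push"))  # {'moved': 0.75, 'unmoved': 0.25}
```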

    A Smart Approach to Harvest Date Forecasting

    The concept of grape ripeness depends not only on the degree of accumulation of chemical compounds in the grape and the volume of the berries, but also on the intended production purpose. Taken individually, the different types of maturation are not sufficient to decide on the harvest date; taken together, however, they define oenological maturity and help determine when to harvest. There are, however, no consistent studies correlating the chemical parameters obtained from must analysis with oenological maturity, due to the nonlinearity of these two types of variables. Therefore, this work seeks to create a self-explanatory model that allows the prediction of the ideal harvest time, based on oenological parameters and on recent developments in knowledge acquisition and management in relational databases.
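
    Since the abstract calls for a self-explanatory model, a natural minimal sketch is a shallow decision tree over must-analysis parameters, whose fitted rules can be printed and read directly. The feature set (°Brix, titratable acidity, pH), the target (days until the ideal harvest date), and all values below are illustrative assumptions, not the paper's data or model.

```python
# Minimal sketch of a self-explanatory harvest-date model: a shallow
# decision tree over must-analysis parameters.
from sklearn.tree import DecisionTreeRegressor, export_text

# Columns: sugar (degrees Brix), titratable acidity (g/L), pH
X = [[18.0, 7.5, 3.1], [20.5, 6.2, 3.3], [22.0, 5.4, 3.4],
     [23.5, 4.9, 3.5], [24.5, 4.3, 3.6], [25.0, 4.0, 3.7]]
y = [28, 21, 14, 7, 3, 0]  # days remaining until the ideal harvest date

tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)

# The fitted rules can be printed verbatim, which is what makes the
# model "self-explanatory" to a viticulturist.
print(export_text(tree, feature_names=["brix", "acidity", "pH"]))
print(tree.predict([[23.0, 5.0, 3.45]]))  # predicted days to harvest
```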

    Overcoming the challenge of bunch occlusion by leaves for vineyard yield estimation using image analysis

    Accurate yield estimation is of utmost importance for the entire grape and wine production chain, yet it remains an extremely challenging process due to high spatial and temporal variability in vineyards. Recent research has focused on using image analysis for vineyard yield estimation, with one of the major obstacles being the high degree of occlusion of bunches by leaves. This work uses canopy features obtained from 2D images (canopy porosity and visible bunch area) as proxies for estimating the proportion of bunches occluded by leaves, to enable automatic yield estimation on non-disturbed canopies. Data were collected from three grapevine varieties, and images were captured from 1 m segments at two phenological stages (veraison and full maturation) in non-defoliated and partially defoliated vines. The fraction of visible bunches (bunch exposure; BE) varied between 16 and 64 %. This percentage was estimated using a multiple regression model that includes canopy porosity and visible bunch area as predictors, yielding an R2 between 0.70 and 0.84 on a training set composed of 70 % of all data, an explanatory power 10 to 43 % higher than when using either predictor individually. A model based on the combined data set (all varieties and phenological stages) was selected for BE estimation, achieving an R2 of 0.80 on the validation set. This model showed no differences in validation metrics between data collected at veraison and at full maturation, suggesting that BE can be accurately estimated at either stage. Bunch exposure was then used to estimate total bunch area (tBA), showing low errors (< 10 %) except for the variety Arinto, which presents specific morphological traits such as large leaves and bunches. Finally, yield estimation computed from estimated tBA presented a very low error (0.2 %) on the validation data set with pooled data. However, when performed on each variety individually, the simplified area-to-mass conversion was less accurate for the variety Syrah. The method demonstrated in this work is an important step towards a fully automated, non-invasive yield estimation approach, as it offers a solution for estimating bunches that are not visible to imaging sensors.
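
    The two-step procedure described above (regress bunch exposure on canopy porosity and visible bunch area, then scale the visible area up to total bunch area and convert area to mass) can be sketched as follows. All numbers and the area-to-mass coefficient are illustrative assumptions, not values from the study.

```python
# Sketch of the two-step estimation: (1) multiple linear regression of
# bunch exposure (BE) on canopy porosity and visible bunch area,
# (2) total bunch area (tBA) and yield from the estimated BE.
import numpy as np

# Illustrative training data, one row per 1 m canopy segment:
porosity = np.array([0.10, 0.18, 0.25, 0.32, 0.40])   # canopy porosity
vis_area = np.array([450., 600., 800., 950., 1200.])  # visible bunch area, cm^2
be = np.array([0.20, 0.30, 0.42, 0.51, 0.64])         # measured bunch exposure

X = np.column_stack([np.ones_like(porosity), porosity, vis_area])
coef, *_ = np.linalg.lstsq(X, be, rcond=None)  # ordinary least squares

def estimate_yield(p, a, grams_per_cm2=0.85):
    """BE -> total bunch area -> mass, via an assumed conversion factor."""
    be_hat = np.array([1.0, p, a]) @ coef
    tba = a / be_hat            # visible area scaled up by estimated exposure
    return tba * grams_per_cm2  # area-to-mass conversion (assumed factor)

print(estimate_yield(0.28, 850.0))  # estimated grams for one segment
```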

    Yield components detection and image-based indicators for non-invasive grapevine yield prediction at different phenological phases

    Forecasting vineyard yield accurately is one of the most important research trends in viticulture today. Conventional methods for yield forecasting are manual, require a lot of labour and resources, and are often destructive. Recently, image-analysis approaches have been explored to address this issue. Many of these approaches use cameras deployed on ground platforms that collect images at proximal range, on-the-go. As the platform moves, yield components and other image-based indicators are detected and counted to perform yield estimation. In most situations, however, when image acquisition is done in non-disturbed canopies, a high fraction of yield components is occluded. The present work's goal is twofold: firstly, to evaluate the visibility of yield components in natural conditions throughout the grapevine's phenological stages; secondly, to explore single-bunch images taken in lab conditions to determine the visible bunch attributes that serve best as yield indicators. In three vineyard plots of red (Syrah) and white (Arinto and Encruzado) varieties, several 1 m canopy segments were imaged using the robotic platform Vinbot. Images were collected from the winter bud stage until harvest, and yield components were counted both in the images and in the field. At the pea-sized berry, veraison and full maturation stages, a bunch sample was collected and brought to lab conditions for detailed assessment at bunch scale. At early stages, all varieties showed good visibility of spurs and shoots; however, the number of shoots was highly and significantly correlated with yield only for the variety Syrah. Inflorescence and bunch occlusion reached high percentages, above 50 %. In lab conditions, among the several bunch attributes studied, bunch volume and bunch projected area showed the highest correlation coefficients with yield. In field conditions, using non-defoliated vines, the projected area of visible bunches presented high and significant correlation coefficients with yield, regardless of fruit occlusion. Our results show that counting yield components with image analysis in non-defoliated vines may be insufficient for accurate yield estimation. On the other hand, using bunch projected area as a predictor may be the best option to achieve that goal, even with high levels of occlusion.
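
    The indicator screening described above amounts to ranking candidate image-based attributes by their correlation with measured yield. A minimal sketch, with invented segment-level data:

```python
# Rank candidate image-based attributes by Pearson correlation with yield.
# The attribute values and yields below are illustrative assumptions.
import numpy as np

def pearson(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

yield_kg = [1.1, 1.8, 2.4, 2.9, 3.6]  # measured yield per 1 m segment
attributes = {
    "visible_bunch_count": [4, 5, 6, 6, 7],
    "bunch_projected_area": [310, 520, 700, 830, 1050],  # cm^2
}
# Print attributes from strongest to weakest association with yield.
for name, values in sorted(attributes.items(),
                           key=lambda kv: -abs(pearson(kv[1], yield_kg))):
    print(f"{name:>22s}: r = {pearson(values, yield_kg):+.3f}")
```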

    Accounting for Sources of Information in Trade Fairs: Evidence from Portuguese Exhibitors

    Trade fairs are important sources of information for decision making in marketing management. Today, trade fairs are places where participants share useful data and information while building relationships between customers (visitors) and suppliers (exhibitors). However, only a limited number of studies have focused on identifying the sources of information that exhibitors can provide to marketing managers at trade fairs. This study examines the importance of the different types of information resources that exhibitors can deliver to managers in order to transfer information about product and market trends. Based on data from a survey of 172 Portuguese executives from different industries, the theoretical hypotheses are tested using confirmatory factor analysis (CFA). Consistent with our hypotheses, the results show that Direct Marketing techniques, such as face-to-face contacts and product/service demonstrations, are often used by exhibitors. Information in digital formats and demonstrations on digital equipment (Digital Marketing) are also used at trade fairs to display information to potential customers. Additionally, the organization of parallel events (Event Marketing) during a trade fair supplements the package of activities exhibitors develop to transmit and capture information for their companies. These results support the importance of trade fairs as a rich source of market information, covering not only new technological developments of products, but also the major strengths and weaknesses of competitors and future market trends, among other types of information needed for marketing planning.
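
    For readers unfamiliar with the analysis step, a CFA with three latent constructs (Direct, Digital and Event Marketing) can be fitted in Python with, for example, the semopy package. The item names and the synthetic data below are invented for illustration; only the sample size matches the study, and the study's actual measurement model may differ.

```python
# Hedged CFA sketch: three latent marketing constructs, two survey items
# each. Item names and data are synthetic; semopy uses lavaan-like syntax.
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

rng = np.random.default_rng(0)
n = 172  # same sample size as the study; the data itself is synthetic
factors = rng.normal(size=(n, 3))
items = {}
for i, f in enumerate(["dm", "dg", "ev"]):
    for j in range(2):
        items[f"{f}{j+1}"] = 0.8 * factors[:, i] + rng.normal(scale=0.6, size=n)
df = pd.DataFrame(items)

desc = """
DirectMarketing  =~ dm1 + dm2
DigitalMarketing =~ dg1 + dg2
EventMarketing   =~ ev1 + ev2
"""
model = Model(desc)
model.fit(df)
print(model.inspect())      # factor loadings and variances
print(calc_stats(model).T)  # fit indices such as CFI and RMSEA
```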